Enhanced mobile computing using cloud resources
The purpose of this research is to investigate, review, and analyse the use of cloud resources for the enhancement of mobile computing. Mobile cloud computing refers to a distributed computing relationship between a resource-constrained mobile device and a remote high-capacity cloud resource. Investigation of prevailing trends has shown that this will be a key technology in the development of future mobile computing systems. This research presents a theoretical analysis framework for mobile cloud computing: a structured consolidation of the salient considerations identified in recent scientific literature and commercial endeavours. Applying this framework to various mobile application domains has elucidated several significant benefits of mobile cloud computing, including increases in system performance and efficiency. Based on recent scientific literature and commercial endeavours, various implementation approaches for mobile cloud computing have been identified, categorized, and analysed according to their architectural characteristics, yielding a set of advantages and disadvantages for each category of system architecture. Overall, through the development and application of the new analysis framework, this work provides a consolidated review and structured critical analysis of current research and developments in the field of mobile cloud computing.
PDoT: Private DNS-over-TLS with TEE Support
Security and privacy of the Internet Domain Name System (DNS) have been
longstanding concerns. Recently, there has been a trend toward protecting DNS traffic using
Transport Layer Security (TLS). However, at least two major issues remain: (1)
how do clients authenticate DNS-over-TLS endpoints in a scalable and extensible
manner; and (2) how can clients trust endpoints to behave as expected? In this
paper, we propose a novel Private DNS-over-TLS (PDoT) architecture. PDoT
includes a DNS Recursive Resolver (RecRes) that operates within a Trusted
Execution Environment (TEE). Using Remote Attestation, DNS clients can
authenticate the PDoT RecRes and receive strong assurance of its trustworthiness.
We provide an open-source proof-of-concept implementation of PDoT and use it to
experimentally demonstrate that its latency and throughput match that of the
popular Unbound DNS-over-TLS resolver.Comment: To appear: ACSAC 201
S-FaaS: Trustworthy and Accountable Function-as-a-Service using Intel SGX
Function-as-a-Service (FaaS) is a recent and already very popular paradigm in
cloud computing. The function provider need only specify the function to be
run, usually in a high-level language like JavaScript, and the service provider
orchestrates all the necessary infrastructure and software stacks. The function
provider is only billed for the actual computational resources used by the
function invocation. Compared to previous cloud paradigms, FaaS requires
significantly more fine-grained resource measurement mechanisms, e.g. to
measure compute time and memory usage of a single function invocation with
sub-second accuracy. Thanks to the short duration and stateless nature of
functions, and the availability of multiple open-source frameworks, FaaS
enables non-traditional service providers, e.g. individuals or data centers with
spare capacity. However, this exacerbates the challenge of ensuring that
resource consumption is measured accurately and reported reliably. It also
raises the issues of ensuring computation is done correctly and minimizing the
amount of information leaked to service providers.
To address these challenges, we introduce S-FaaS, the first architecture and
implementation of FaaS to provide strong security and accountability guarantees
backed by Intel SGX. To match the dynamic event-driven nature of FaaS, our
design introduces a new key distribution enclave and a novel transitive
attestation protocol. A core contribution of S-FaaS is our set of resource
measurement mechanisms that securely measure compute time inside an enclave,
and actual memory allocations. We have integrated S-FaaS into the popular
OpenWhisk FaaS framework. We evaluate the security of our architecture, the
accuracy of our resource measurement mechanisms, and the performance of our
implementation, showing that our resource measurement mechanisms add less than
6.3% latency on standardized benchmarks.
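The per-invocation metering idea can be sketched in plain Python (a toy model only: S-FaaS performs these measurements inside an SGX enclave so the service provider cannot tamper with them, whereas `time` and `tracemalloc` here are ordinary untrusted stand-ins):

```python
import time
import tracemalloc

def metered_invoke(fn, *args, **kwargs):
    """Run one function invocation and report wall-clock compute time and
    peak memory allocated during the call, standing in for S-FaaS's
    trusted measurement of compute time and actual allocations."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, {"seconds": elapsed, "peak_bytes": peak}

def handler(n):
    # Toy "serverless function": allocates a list, returns its sum.
    return sum(list(range(n)))

result, usage = metered_invoke(handler, 100_000)
assert result == 4_999_950_000
assert usage["seconds"] > 0 and usage["peak_bytes"] > 0
```

In the real system, the resulting usage report would additionally be signed by the enclave so that billing is verifiable by both function provider and service provider.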
HardScope: Thwarting DOP with Hardware-assisted Run-time Scope Enforcement
Widespread use of memory unsafe programming languages (e.g., C and C++)
leaves many systems vulnerable to memory corruption attacks. A variety of
defenses have been proposed to mitigate attacks that exploit memory errors to
hijack the control flow of the code at run-time, e.g., (fine-grained)
randomization or Control Flow Integrity. However, recent work on data-oriented
programming (DOP) demonstrated highly expressive (Turing-complete) attacks,
even in the presence of these state-of-the-art defenses. Although multiple
real-world DOP attacks have been demonstrated, no efficient defenses are yet
available. We propose run-time scope enforcement (RSE), a novel approach
designed to efficiently mitigate all currently known DOP attacks by enforcing
compile-time memory safety constraints (e.g., variable visibility rules) at
run-time. We present HardScope, a proof-of-concept implementation of
hardware-assisted RSE for the new RISC-V open instruction set architecture. We
discuss our systematic empirical evaluation of HardScope, which demonstrates
that it can mitigate all currently known DOP attacks, and has a real-world
performance overhead of 3.2% in embedded benchmarks.
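The core RSE idea, enforcing compile-time visibility rules at run-time, can be modelled in a few lines of Python (purely illustrative: HardScope implements this with new RISC-V hardware instructions, and the scope/delegation API below is an invented abstraction, not the actual ISA extension):

```python
class ScopeViolation(Exception):
    pass

class ScopeEnforcer:
    """Toy model of run-time scope enforcement: each function activation
    records which storage locations it may touch, and every access is
    checked against the rules of the current (top) scope."""
    def __init__(self):
        self.stack = []

    def enter(self, allowed):
        self.stack.append(set(allowed))

    def exit(self):
        self.stack.pop()

    def delegate(self, loc):
        # The caller explicitly grants the callee access to one location,
        # mirroring delegation of pointer arguments across a call.
        self.stack[-1].add(loc)

    def access(self, loc):
        if loc not in self.stack[-1]:
            raise ScopeViolation(f"access to {loc!r} outside current scope")
        return True

mem = ScopeEnforcer()
mem.enter({"main.secret", "main.buf"})   # main() may touch its own locals
mem.enter({"f.tmp"})                     # f() starts with only its locals
mem.delegate("main.buf")                 # main passed &buf into f
assert mem.access("f.tmp")
assert mem.access("main.buf")
try:
    mem.access("main.secret")            # DOP-style out-of-scope read
    blocked = False
except ScopeViolation:
    blocked = True
assert blocked
```

The blocked access is exactly the kind of cross-scope data read or write that data-oriented programming attacks rely on, even when control flow remains entirely legitimate.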
Migrating SGX Enclaves with Persistent State
Hardware-supported security mechanisms like Intel Software Guard Extensions
(SGX) provide strong security guarantees, which are particularly relevant in
cloud settings. However, their reliance on physical hardware conflicts with
cloud practices, like migration of VMs between physical platforms. For
instance, the SGX trusted execution environment (enclave) is bound to a single
physical CPU.
Although prior work has proposed an effective mechanism to migrate an
enclave's data memory, it overlooks the migration of persistent state,
including sealed data and monotonic counters; the former risks data loss whilst
the latter undermines the SGX security guarantees. We show how this can be
exploited to mount attacks, and then propose an improved enclave migration
approach guaranteeing the consistency of persistent state. Our software-only
approach enables migratable sealed data and monotonic counters, maintains all
SGX security guarantees, minimizes developer effort, and incurs negligible
performance overhead.
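The monotonic-counter half of the problem can be sketched as a toy model (the actual scheme must also migrate sealed data and use hardware-backed mechanisms to ensure the source copy really cannot be revived; the class below only illustrates the consistency invariant):

```python
class MigratableCounter:
    """Toy model of a migratable monotonic counter: migration exports the
    current value and permanently disables the source copy, so the counter
    can never be observed to go backwards across platforms."""
    def __init__(self, value=0):
        self.value = value
        self.disabled = False

    def increment(self):
        if self.disabled:
            raise RuntimeError("counter was migrated away")
        self.value += 1
        return self.value

    def export_for_migration(self):
        self.disabled = True      # the source must never serve reads again
        return self.value

src = MigratableCounter()
src.increment(); src.increment()
dst = MigratableCounter(src.export_for_migration())
assert dst.increment() == 3       # destination continues monotonically
try:
    src.increment()               # rollback via the stale source copy fails
    rollback_blocked = False
except RuntimeError:
    rollback_blocked = True
assert rollback_blocked
```

Without the disable step, an attacker could keep the source platform alive and replay its lower counter value, which is precisely the rollback attack the abstract warns about.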
LO-FAT: Low-Overhead Control Flow ATtestation in Hardware
Attacks targeting software on embedded systems are becoming increasingly
prevalent. Remote attestation is a mechanism that allows establishing trust in
embedded devices. However, existing attestation schemes are either static and
cannot detect control-flow attacks, or require instrumentation of software
incurring high performance overheads. To overcome these limitations, we present
LO-FAT, the first practical hardware-based approach to control-flow
attestation. By leveraging existing processor hardware features and
commonly-used IP blocks, our approach enables efficient control-flow
attestation without requiring software instrumentation. We show that our
proof-of-concept implementation based on a RISC-V SoC incurs no processor
stalls and requires reasonable area overhead.
Comment: Authors' pre-print version to appear in DAC 2017 proceeding
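Control-flow attestation can be illustrated as a hash chain folded over the executed control-flow edges (a sketch only: LO-FAT computes this digest in hardware alongside the processor, and the edge encoding and chaining scheme below are illustrative choices, not the paper's exact construction):

```python
import hashlib

def attest_path(edges):
    """Fold a sequence of executed control-flow edges into one digest,
    as a control-flow attestation engine would do during execution."""
    h = b"\x00" * 32
    for src, dst in edges:
        h = hashlib.sha256(h + f"{src}->{dst}".encode()).digest()
    return h.hex()

# Hypothetical control-flow edges for one run of a small program.
expected = attest_path([("entry", "check"), ("check", "ok"), ("ok", "exit")])
hijacked = attest_path([("entry", "check"), ("check", "gadget"),
                        ("gadget", "exit")])

# The remote verifier accepts only runs whose digest matches a path that
# is valid in the program's control-flow graph.
assert expected != hijacked
```

Because the digest commits to the entire executed path, a verifier can detect control-flow attacks that a static (code-only) attestation of the binary would miss.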
An Empirical Study & Evaluation of Modern CAPTCHAs
For nearly two decades, CAPTCHAs have been widely used as a means of
protection against bots. Throughout the years, as their use grew, techniques to
defeat or bypass CAPTCHAs have continued to improve. Meanwhile, CAPTCHAs have
also evolved in terms of sophistication and diversity, becoming increasingly
difficult to solve for both bots (machines) and humans. Given this
long-standing and still-ongoing arms race, it is critical to investigate how
long it takes legitimate users to solve modern CAPTCHAs, and how they are
perceived by those users.
In this work, we explore CAPTCHAs in the wild by evaluating users' solving
performance and perceptions of unmodified currently-deployed CAPTCHAs. We
obtain this data through manual inspection of popular websites and user studies
in which 1,400 participants collectively solved 14,000 CAPTCHAs. Results show
significant differences between the most popular types of CAPTCHAs:
surprisingly, solving time and user perception are not always correlated. We
performed a comparative study to investigate the effect of experimental context
-- specifically the difference between solving CAPTCHAs directly versus solving
them as part of a more natural task, such as account creation. Whilst there
were several potential confounding factors, our results show that experimental
context could have an impact on this task, and must be taken into account in
future CAPTCHA studies. Finally, we investigate CAPTCHA-induced user task
abandonment by analyzing participants who start and do not complete the task.
Comment: Accepted at USENIX Security 202
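The claim that solving time and user perception are not always correlated is a rank-correlation question; it can be illustrated with a self-contained Spearman computation (the numbers below are entirely hypothetical toy values, not the study's data, and the helper functions are illustrative):

```python
def ranks(xs):
    """Rank values (1 = smallest); toy helper assuming distinct values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation via the d^2 formula (no ties)."""
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical per-CAPTCHA-type aggregates, for illustration only:
# mean solving time (seconds) and mean preference score (higher = liked more).
solve_time = [3.1, 9.8, 15.0, 18.3, 22.5]
preference = [2.0, 4.5, 1.5, 4.0, 3.0]

rho = spearman(solve_time, preference)
assert -1.0 <= rho <= 1.0   # here rho is weak, despite widely varying times
```

A near-zero rho on such data would mirror the abstract's finding: the CAPTCHAs users solve fastest are not necessarily the ones they like best.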